92 research outputs found

    Relational extensions to feature logic: applications to constraint based grammars

    This thesis investigates the logical and computational foundations of unification-based, or more appropriately constraint-based, grammars. It explores extensions to feature logics (which provide the basic knowledge representation services to constraint-based grammars) with multi-valued or relational features. These extensions are useful for knowledge representation tasks that cannot be expressed within current feature logics. The approach bridges the gap between concept languages (such as KL-ONE), which are the mainstay of knowledge representation languages in AI, and feature logics. Various constraints on relational attributes are considered, such as existential membership, universal membership, set descriptions, transitive relations and linear precedence constraints. The specific contributions of this thesis can be summarised as follows: 1. Development of an integrated feature/concept logic. 2. Development of a constraint logic for so-called partial set descriptions. 3. Development of a constraint logic for expressing linear precedence constraints. 4. The design of a constraint language, CL-ONE, that incorporates the central ideas provided by the above study. 5. A study of the application of CL-ONE to constraint-based grammars. The thesis takes into account current insights in the areas of constraint logic programming, object-oriented languages, computational linguistics and knowledge representation.

    FocusNet: An attention-based Fully Convolutional Network for Medical Image Segmentation

    We propose a novel technique to incorporate attention within convolutional neural networks using feature maps generated by a separate convolutional autoencoder. Our attention architecture is well suited for incorporation with deep convolutional networks. We evaluate our model on benchmark segmentation datasets in skin cancer segmentation and lung lesion segmentation. Results show highly competitive performance compared with U-Net and its residual variant.
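    The mechanism described in the abstract — feature maps from a separately trained autoencoder modulating the main network's features — can be sketched as follows. This is a minimal illustration assuming elementwise sigmoid gating; the paper's actual fusion strategy, layer placement, and function names (`attention_gate` is hypothetical) may differ.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def attention_gate(features, attn_map):
        """Modulate encoder features with an externally produced attention map.

        features: (H, W, C) feature maps from the segmentation network.
        attn_map: (H, W, 1) feature map from a separate convolutional
                  autoencoder, broadcast across the channel dimension.
        """
        gate = sigmoid(attn_map)          # squash to (0, 1) attention weights
        return features * gate            # elementwise gating, broadcast over C

    # Example: a zero attention map gates every activation by sigmoid(0) = 0.5
    feats = np.ones((4, 4, 8))
    attn = np.zeros((4, 4, 1))
    gated = attention_gate(feats, attn)
    ```

    In a full model, the gate would typically be applied at matching spatial resolutions inside the encoder or at skip connections, rather than to a single feature tensor as shown here.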

    Learning predictive categories using lifted relational neural networks

    Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can easily be used to declaratively specify, and solve, learning problems in which latent categories of entities, properties and relations need to be jointly induced.

    FocusNet++: Attentive Aggregated Transformations for Efficient and Accurate Medical Image Segmentation

    We propose a new residual block for convolutional neural networks and demonstrate its state-of-the-art performance in medical image segmentation. We combine attention mechanisms with group convolutions to create our group attention mechanism, which forms the fundamental building block of our network, FocusNet++. We employ a hybrid loss based on balanced cross entropy, Tversky loss and the adaptive logarithmic loss to improve performance and speed up convergence. Our results show that FocusNet++ achieves state-of-the-art results across various benchmark metrics for the ISIC 2018 melanoma segmentation and the cell nuclei segmentation datasets, with fewer parameters and FLOPs. Comment: Published at ISBI 202
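    Two of the three loss terms named in the abstract have standard formulations that can be sketched. The following is illustrative only: the mixing weights, the `hybrid_loss` name, and the balancing scheme are assumptions, and the paper's adaptive logarithmic loss term is not reproduced here.

    ```python
    import numpy as np

    def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
        """Tversky loss: 1 - TP / (TP + alpha*FP + beta*FN).

        A common formulation; alpha penalises false positives, beta false
        negatives (the paper's exact alpha/beta are not given here).
        """
        tp = np.sum(pred * target)
        fp = np.sum(pred * (1.0 - target))
        fn = np.sum((1.0 - pred) * target)
        return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

    def balanced_bce(pred, target, eps=1e-7):
        """Cross entropy with positives weighted by the negative-class fraction."""
        pos_weight = 1.0 - target.mean()
        p = np.clip(pred, eps, 1.0 - eps)
        return -np.mean(pos_weight * target * np.log(p)
                        + (1.0 - pos_weight) * (1.0 - target) * np.log(1.0 - p))

    def hybrid_loss(pred, target, w_bce=0.5, w_tv=0.5):
        # Illustrative equal weighting; the paper also adds an adaptive
        # logarithmic loss term, which is omitted from this sketch.
        return w_bce * balanced_bce(pred, target) + w_tv * tversky_loss(pred, target)
    ```

    Combining an overlap-based term (Tversky) with a pixel-wise term (balanced cross entropy) is a common way to handle class imbalance in segmentation, since the overlap term is insensitive to the large background region.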